Integration of data processed via BATCH
This page will help you continue with your BATCH integration.
To continue with the integration via BATCH:

- You can choose to receive the .csv or .parquet files via AWS S3, Google Storage, Azure, or a direct link (in this case you will need a webhook endpoint to receive and save the files; a minimal receiver sketch appears after this list).
- You will need to register your credentials, depending on the type of integration you chose. To do this, log in, get an access token, and save the credentials using our endpoints (see the sketch after this list). We recommend using Swagger itself to log in and save the credentials; alternatively, you can use tools such as Postman or Insomnia, or cURL. More details on how to do this are on the corresponding pages.
- We need to agree on a period for sending the files, for example every 4 hours or every 12 hours.
- After that, you will start receiving your data at the agreed period, in the agreed format and storage.
- You can check common questions about the parquet format: How the .parquet format works.
- You can check examples of .parquet files: Examples of .parquet files (a short snippet for inspecting one locally follows this list).
- You can check an example of an alarm file: Example of alarm file.
- You can check the Water Timeline Export documentation: Water Timeline Export - Documentation.
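
If you opt for direct-link delivery, you are responsible for exposing a webhook that receives each file and saves it. Below is a minimal sketch using only the Python standard library; the `X-Filename` header and the raw-body upload format are illustrative assumptions, not the documented contract, so adapt them to the actual delivery details.

```python
# Minimal sketch of a webhook receiver for direct-link delivery.
# The X-Filename header and raw-body upload are assumptions for
# illustration; adjust to the real delivery contract.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

SAVE_DIR = Path("received_files")
SAVE_DIR.mkdir(exist_ok=True)

class BatchFileHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded file from the request body.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        # Hypothetical header carrying the original file name;
        # Path(...).name strips any directory components.
        filename = self.headers.get("X-Filename", "export.parquet")
        (SAVE_DIR / Path(filename).name).write_bytes(body)

        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), BatchFileHandler).serve_forever()
```

In production you would typically add authentication of incoming requests and run the receiver behind HTTPS.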
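The credential-registration step follows a common login, then bearer-token pattern. The sketch below shows that flow with the `requests` library; the base URL, endpoint paths, and payload fields are placeholders (the real ones are listed in the Swagger documentation), so only the overall pattern is illustrated.

```python
# Sketch of the login -> token -> save-credentials flow.
# BASE_URL, endpoint paths, and payload fields are placeholders;
# consult the Swagger documentation for the real ones.
import requests

BASE_URL = "https://api.example.com"  # placeholder host

# 1. Log in to obtain an access token.
login = requests.post(
    f"{BASE_URL}/login",
    json={"username": "your-user", "password": "your-password"},
    timeout=30,
)
login.raise_for_status()
token = login.json()["access_token"]  # assumed response field

# 2. Save the storage credentials for your chosen integration
#    (AWS S3 in this hypothetical payload).
resp = requests.post(
    f"{BASE_URL}/credentials",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "type": "aws_s3",  # or google_storage / azure / direct_link
        "bucket": "my-bucket",
        "access_key_id": "AKIA...",
        "secret_access_key": "...",
    },
    timeout=30,
)
resp.raise_for_status()
print("Credentials saved:", resp.status_code)
```

The same requests can be issued from Postman, Insomnia, or cURL; the Python version is shown only to make the sequence explicit.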
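To inspect a received .parquet file locally, a few lines of pandas are enough; this requires pandas plus a parquet engine such as pyarrow (`pip install pandas pyarrow`). The file name below is a placeholder.

```python
# Quick local inspection of a received .parquet file.
import pandas as pd

df = pd.read_parquet("export.parquet")  # placeholder file name
print(df.dtypes)   # column names and types
print(df.head())   # first rows of the export
```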